Inductive Transfer using Kernel Multitask Latent Analysis

Author

  • Z. XIANG

Abstract

We develop the Kernel Multitask Latent Analysis (KMLA) method for modeling many-to-many relationships between inputs and responses, and show how it can be applied to inductive transfer problems in bioseparations. KMLA performs dimensionality reduction targeted toward a multitask loss function, much like Kernel Partial Least Squares (KPLS). KPLS is limited to least-squares multiple regression, while KMLA is a more general approach that can utilize widely-used convex loss functions for inference tasks. KMLA achieves inductive transfer between tasks by forcing the tasks to share the same latent features. In the bioseparation problem, the goal is to predict protein retention times for novel anion-chromatography systems; only a few retention times are known for the target systems, while many protein retention times are known for the related systems. KMLA is used with semi-supervised loss functions that do not require that all proteins have responses for all the systems. Results are presented for both regression and ranking losses. Multitask KMLA significantly outperforms single-task KMLA, KPLS, and the existing missing-response algorithm for multitask PLS extended to kernels.
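The shared-latent-feature mechanism described in the abstract can be illustrated with an ordinary NIPALS-style kernel PLS loop in which all tasks (the columns of Y) are fitted through the same score vectors. This is a minimal sketch of that mechanism only, not the paper's KMLA algorithm (KMLA replaces the least-squares fit below with general convex losses and handles missing responses); the function name `kmla_style_kpls` and the linear-kernel demo are illustrative assumptions.

```python
import numpy as np

def kmla_style_kpls(K, Y, n_components, n_iter=100, tol=1e-8):
    """Kernel PLS sketch: every task (column of Y) is regressed
    through the SAME score vectors t -- the shared latent features
    that provide inductive transfer between tasks.

    K : (n, n) centered kernel matrix
    Y : (n, T) response matrix, one column per task
    Returns the (n, n_components) matrix of shared latent scores."""
    K = K.copy()
    Y = Y.copy()
    n = K.shape[0]
    scores = []
    for _ in range(n_components):
        u = Y[:, [0]]                      # initialize with one task's residual
        for _ in range(n_iter):
            t = K @ u                      # score in kernel feature space
            t /= np.linalg.norm(t)
            c = Y.T @ t                    # loadings for ALL tasks at once
            u_new = Y @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        # deflate the kernel and every task's responses with the shared score t
        P = np.eye(n) - t @ t.T
        K = P @ K @ P
        Y = Y - t @ (t.T @ Y)
        scores.append(t)
    return np.hstack(scores)

# Demo on synthetic data with a linear kernel (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
K = X @ X.T
K = K - K.mean(axis=0) - K.mean(axis=1)[:, None] + K.mean()  # center kernel
Y = np.column_stack([X @ rng.normal(size=5) for _ in range(3)])  # 3 related tasks
T_lat = kmla_style_kpls(K, Y, n_components=2)
```

Because each deflation projects the kernel onto the orthogonal complement of the current score, successive shared scores come out mutually orthogonal; the tasks differ only in their loadings c, which is exactly the "same latent features, different task weights" structure the abstract refers to.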


Similar articles

Multitask Learning

Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new eviden...


Algorithms and Applications for Multitask Learning

Multitask Learning is an inductive transfer method that improves generalization by using domain information implicit in the training signals of related tasks as an inductive bias. It does this by learning multiple tasks in parallel using a shared representation. Multitask transfer in connectionist nets has already been proven. But questions remain about how often training data for useful extra...


Incorporating Prior Knowledge About Financial Markets Through Neural Multitask Learning

We present the systematic method of Multitask Learning for incorporating prior knowledge (hints) into the inductive learning system of neural networks. Multitask Learning is an inductive transfer method which uses domain information about related tasks as inductive bias to guide the learning process towards better solutions of the main problem. These tasks are presented to the learning system i...


Multi-Task Multiple Kernel Relationship Learning

This paper presents a novel multitask multiple kernel learning framework that efficiently learns the kernel weights leveraging the relationship across multiple tasks. The idea is to automatically infer this task relationship in the RKHS space corresponding to the given base kernels. The problem is formulated as a regularization-based approach called MultiTask Multiple Kernel Relationship Learni...


Multitask Learning: A Knowledge-Based Source of Inductive Bias

This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional ta...



Journal:

Volume   Issue 

Pages  -

Publication date: 2005